---
title: "new corpus compMusic"
output:
flexdashboard::flex_dashboard:
# orientation: columns
storyboard: true
social: menu
source: embed
date: "2024-02-23"
# editor_options:
# markdown:
# wrap: sentence
---
```{r setup, include=FALSE}
# knitr::opts_chunk$set(echo = TRUE)
library(flexdashboard)
```
```{r, echo=FALSE}
library(tidyverse)
library(spotifyr)
library(ggplot2)
library(compmus)
library(tidymodels)
library(ggdendro)
library(heatmaply)
library(ggpubr)
theme_set(
theme_bw()
# theme(legend.position = "top")
)
df <- read_csv(
"Liked Songs.csv",
show_col_types = FALSE
) %>%
subset(select = -c(
isLocal,
isLikedByUser,
trackIsrc,
trackUrl,
artistUrl,
albumUrl,
albumUpc,
albumType,
addedBy
)) %>%
arrange(addedAt)
df_stats_global <- df %>%
summarise(
mean_speechiness = mean(trackFeatureSpeechiness),
mean_acousticness = mean(trackFeatureAcousticness),
mean_liveness = mean(trackFeatureLiveness),
sd_speechiness = sd(trackFeatureSpeechiness),
sd_acousticness = sd(trackFeatureAcousticness),
sd_liveness = sd(trackFeatureLiveness),
median_speechiness = median(trackFeatureSpeechiness),
median_acousticness = median(trackFeatureAcousticness),
median_liveness = median(trackFeatureLiveness),
mad_speechiness = mad(trackFeatureSpeechiness),
mad_acousticness = mad(trackFeatureAcousticness),
mad_liveness = mad(trackFeatureLiveness)
)
# flangestab_raw <- get_tidy_audio_analysis("0zW8R4isqSoK0NdedBnJ80")
#
flatbeat_raw <- get_tidy_audio_analysis("7GP1ZmPGH1puBliT9S6Fi5")
flatbeat_raw_radioedit <- get_tidy_audio_analysis("5jaVyz2GDdesyu01cBbOSc")
fjords_raw <- get_tidy_audio_analysis("03MeuHSDwaWnwTnignD6S9")
favorite_things_raw <- get_tidy_audio_analysis("2JIDQilIIxsxwZaS5xz8Av")
rage_raw <- get_tidy_audio_analysis("3yKZmnS4wamdGOP3BXpF3G")
get_conf_mat <- function(fit) {
outcome <- .get_tune_outcome_names(fit)
fit |>
collect_predictions() |>
conf_mat(truth = outcome, estimate = .pred_class)
}
get_pr <- function(fit) {
fit |>
conf_mat_resampled() |>
group_by(Prediction) |> mutate(precision = Freq / sum(Freq)) |>
group_by(Truth) |> mutate(recall = Freq / sum(Freq)) |>
ungroup() |> filter(Prediction == Truth) |>
select(class = Prediction, precision, recall)
}
circshift <- function(v, n) {
if (n == 0) v else c(tail(v, n), head(v, -n))
}
# C C# D Eb E F F# G Ab A Bb B
major_chord <-
c( 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0)
minor_chord <-
c( 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0)
seventh_chord <-
c( 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0)
major_key <-
c(6.35, 2.23, 3.48, 2.33, 4.38, 4.09, 2.52, 5.19, 2.39, 3.66, 2.29, 2.88)
minor_key <-
c(6.33, 2.68, 3.52, 5.38, 2.60, 3.53, 2.54, 4.75, 3.98, 2.69, 3.34, 3.17)
chord_templates <-
tribble(
~name, ~template,
"Gb:7", circshift(seventh_chord, 6),
"Gb:maj", circshift(major_chord, 6),
"Bb:min", circshift(minor_chord, 10),
"Db:maj", circshift(major_chord, 1),
"F:min", circshift(minor_chord, 5),
"Ab:7", circshift(seventh_chord, 8),
"Ab:maj", circshift(major_chord, 8),
"C:min", circshift(minor_chord, 0),
"Eb:7", circshift(seventh_chord, 3),
"Eb:maj", circshift(major_chord, 3),
"G:min", circshift(minor_chord, 7),
"Bb:7", circshift(seventh_chord, 10),
"Bb:maj", circshift(major_chord, 10),
"D:min", circshift(minor_chord, 2),
"F:7", circshift(seventh_chord, 5),
"F:maj", circshift(major_chord, 5),
"A:min", circshift(minor_chord, 9),
"C:7", circshift(seventh_chord, 0),
"C:maj", circshift(major_chord, 0),
"E:min", circshift(minor_chord, 4),
"G:7", circshift(seventh_chord, 7),
"G:maj", circshift(major_chord, 7),
"B:min", circshift(minor_chord, 11),
"D:7", circshift(seventh_chord, 2),
"D:maj", circshift(major_chord, 2),
"F#:min", circshift(minor_chord, 6),
"A:7", circshift(seventh_chord, 9),
"A:maj", circshift(major_chord, 9),
"C#:min", circshift(minor_chord, 1),
"E:7", circshift(seventh_chord, 4),
"E:maj", circshift(major_chord, 4),
"G#:min", circshift(minor_chord, 8),
"B:7", circshift(seventh_chord, 11),
"B:maj", circshift(major_chord, 11),
"D#:min", circshift(minor_chord, 3)
)
key_templates <-
tribble(
~name, ~template,
"Gb:maj", circshift(major_key, 6),
"Bb:min", circshift(minor_key, 10),
"Db:maj", circshift(major_key, 1),
"F:min", circshift(minor_key, 5),
"Ab:maj", circshift(major_key, 8),
"C:min", circshift(minor_key, 0),
"Eb:maj", circshift(major_key, 3),
"G:min", circshift(minor_key, 7),
"Bb:maj", circshift(major_key, 10),
"D:min", circshift(minor_key, 2),
"F:maj", circshift(major_key, 5),
"A:min", circshift(minor_key, 9),
"C:maj", circshift(major_key, 0),
"E:min", circshift(minor_key, 4),
"G:maj", circshift(major_key, 7),
"B:min", circshift(minor_key, 11),
"D:maj", circshift(major_key, 2),
"F#:min", circshift(minor_key, 6),
"A:maj", circshift(major_key, 9),
"C#:min", circshift(minor_key, 1),
"E:maj", circshift(major_key, 4),
"G#:min", circshift(minor_key, 8),
"B:maj", circshift(major_key, 11),
"D#:min", circshift(minor_key, 3)
)
```
Corpus
=========================================
#### Out with the old: start of a beautiful thing
After almost twelve years of studying music through the lens of a jazz musician, you could say my taste in music and my capacity for musical analysis have been shaped significantly.
Playing in a big band twice a week and simply being around a lot of jazz came to dominate how I think about, listen to and, of course, play music.
After a series of unfortunate events, the members of our big band decided to part ways, and around this time I made the momentous decision to start paying for a Spotify account.
This marks the start of a liked playlist that was heavily influenced by jazz.
#### Changes in life and music
Two years went by, and I decided to go on exchange and have the time of my life.
In this period, when I was supposed to be studying, I learned to appreciate going out and clubbing, with a particular love for minimal techno and deep house.
These styles of music are repetitive and not always the most complex, which would not have made my heart beat faster in the past.
Even so, this newfound appreciation for a new style of music must have shifted the average characteristics of the songs in my Spotify liked playlist considerably.
#### Analyse what you like
Because I am aware of this shift in my own music taste, I hope to see it reflected in a deeper analysis of my Spotify liked playlist using the Spotify API. Since an account's liked playlist is the only playlist that cannot be made public, I had to use a workaround and export my liked playlist as a CSV file through a service called *Skiley*.
I could not simply copy my liked songs to a regular Spotify playlist and export them that way, since the *dateAdded* field would not have been preserved, and that field is essential for an analysis over time.
The resulting corpus consists of 1338 songs, accumulated over two and a half years of joyous listening.
Next to the corpus-wide analysis, we'll also look at three track-specific analyses.
*Fjords* by Peter Guidi is a slow jazz ballad with complex melodies and the lead played on saxophone.
Secondly, I'll take a look at *Rage* by DBBD, a fast-paced trance song made specifically to dance to.
It has a monotonous melody and chord progression and an elevated BPM, and should be distinctly different from *Fjords* when analysing these features.
Lastly, I wanted to include something as close as possible to an intermediate between these two genres, so I've included *My Favorite Things* by Outkast: a cover of the jazz classic written by Richard Rodgers and famously recorded by John Coltrane, played here in a style closer to jungle and techno.
```{r, include=FALSE}
sort(colnames(df))
# sapply(df, class)
```
From *Rage* to *My Favorite Things* {.storyboard}
=========================================
### Structural differences
```{r}
rage <- rage_raw |>
compmus_align(bars, segments) |> # Change `bars`
select(bars) |> # in all three
unnest(bars) |> # of these lines.
mutate(
pitches =
map(segments,
compmus_summarise, pitches,
method = "mean", norm = "manhattan" # Change summary & norm.
)
) |>
mutate(
timbre =
map(segments,
compmus_summarise, timbre,
method = "mean", norm = "manhattan" # Change summary & norm.
)
)
rage_plot <- rage |>
compmus_self_similarity(timbre, "cosine") |>
ggplot(
aes(
x = xstart + xduration / 2,
width = xduration,
y = ystart + yduration / 2,
height = yduration,
fill = d
)
) +
geom_tile() +
coord_fixed() +
scale_fill_viridis_c(guide = "none") +
theme_classic() +
labs(title="Rage", x = NULL, y = NULL)
favorite_things <- favorite_things_raw |>
compmus_align(bars, segments) |> # Change `bars`
select(bars) |> # in all three
unnest(bars) |> # of these lines.
mutate(
pitches =
map(segments,
compmus_summarise, pitches,
method = "mean", norm = "manhattan" # Change summary & norm.
)
) |>
mutate(
timbre =
map(segments,
compmus_summarise, timbre,
method = "mean", norm = "manhattan" # Change summary & norm.
)
)
favorite_plot <- favorite_things |>
compmus_self_similarity(timbre, "cosine") |>
ggplot(
aes(
x = xstart + xduration / 2,
width = xduration,
y = ystart + yduration / 2,
height = yduration,
fill = d
)
) +
geom_tile() +
coord_fixed() +
scale_fill_viridis_c(guide = "none") +
theme_classic() +
labs(title="My Favorite Things", x = NULL, y = NULL)
fjords <- fjords_raw |>
compmus_align(bars, segments) |> # Change `bars`
select(bars) |> # in all three
unnest(bars) |> # of these lines.
mutate(
pitches =
map(segments,
compmus_summarise, pitches,
method = "mean", norm = "manhattan" # Change summary & norm.
)
) |>
mutate(
timbre =
map(segments,
compmus_summarise, timbre,
method = "mean", norm = "manhattan" # Change summary & norm.
)
)
fjords_plot <- fjords |>
compmus_self_similarity(timbre, "cosine") |>
ggplot(
aes(
x = xstart + xduration / 2,
width = xduration,
y = ystart + yduration / 2,
height = yduration,
fill = d
)
) +
geom_tile() +
coord_fixed() +
scale_fill_viridis_c(guide = "none") +
theme_classic() +
labs(title = "Fjords", x = NULL, y = NULL)
fjords_plot2 <- fjords |>
compmus_gather_timbre() |>
ggplot(
aes(
x = start + duration / 2,
width = duration,
y = basis,
fill = value
)
) +
geom_tile() +
labs(x = NULL, y = NULL, fill = NULL) +
scale_fill_viridis_c(guide = "none") +
theme_classic()
favorite_plot2 <- favorite_things |>
compmus_gather_timbre() |>
ggplot(
aes(
x = start + duration / 2,
width = duration,
y = basis,
fill = value
)
) +
geom_tile() +
labs(x = NULL, y = NULL, fill = NULL) +
scale_fill_viridis_c(guide = "none") +
theme_classic()
rage_plot2 <- rage |>
compmus_gather_timbre() |>
ggplot(
aes(
x = start + duration / 2,
width = duration,
y = basis,
fill = value
)
) +
geom_tile() +
labs(x = NULL, y = NULL, fill = NULL) +
scale_fill_viridis_c(guide = "none") +
theme_classic()
figure <- ggarrange(fjords_plot, favorite_plot, rage_plot, fjords_plot2, favorite_plot2, rage_plot2,
common.legend = TRUE,
ncol = 3, nrow = 2)
annotate_figure(figure,
# top = text_grob("Visualizing Tooth Growth", color = "red", face = "bold", size = 14),
left = text_grob("cepstrogram | timbre self-similarity", color = "black", rot = 90),
bottom = text_grob("Time (s)", color = "black", size=10)
# fig.lab = "Figure 1", fig.lab.face = "bold"
)
#figure
```
***
Looking at the timbre-based self-similarity matrices, we see very distinct differences between the three tracks.
Where *Rage* shows very clear changes in timbre features, depicting the different sections and especially the breaks in the music, these changes are not as visible in *My Favorite Things* and *Fjords*.
A reason could be that *My Favorite Things* consists mainly of percussion, which overpowers the timbre values and results in indistinct sections.
Where *My Favorite Things* lacks sections, *Fjords* makes up for it with a chaotic similarity matrix.
This could be caused by bar-to-bar differences in the interaction between the instruments.
We do see vaguely lighter and darker square sections, which indicate the sax, piano and, lastly, the drum solos.
***
Looking at the cepstrograms for these songs, we again see a lot of variety for jazz and a more distinctive c02 timbre component for techno during the 'breaks'. For the whole duration of *My Favorite Things* there is an overwhelming presence of c01, which corresponds to loudness and is probably caused by the avid use of percussion.
### Chordogram analysis
```{r}
fjords <- fjords_raw |>
compmus_align(sections, segments) |>
select(sections) |>
unnest(sections) |>
mutate(
pitches =
map(segments,
compmus_summarise, pitches,
method = "mean", norm = "manhattan"
)
)
fjords_plot <- fjords |>
compmus_match_pitch_template(
key_templates, # Change to chord_templates if desired
method = "euclidean", # Try different distance metrics
norm = "manhattan" # Try different norms
) |>
ggplot(
aes(x = start + duration / 2, width = duration, y = name, fill = d)
) +
geom_tile() +
scale_fill_viridis_c(guide = "none") +
theme_minimal() +
labs(title = "Fjords", x = "", y = "")
favorite_things <- favorite_things_raw |>
compmus_align(sections, segments) |>
select(sections) |>
unnest(sections) |>
mutate(
pitches =
map(segments,
compmus_summarise, pitches,
method = "mean", norm = "manhattan"
)
)
favorite_plot <- favorite_things |>
compmus_match_pitch_template(
key_templates, # Change to chord_templates if desired
method = "euclidean", # Try different distance metrics
norm = "manhattan" # Try different norms
) |>
ggplot(
aes(x = start + duration / 2, width = duration, y = name, fill = d)
) +
geom_tile() +
scale_fill_viridis_c(guide = "none") +
theme_minimal() +
labs(title = "My Favorite Things", x = "", y = "")
rage <- rage_raw |>
compmus_align(sections, segments) |>
select(sections) |>
unnest(sections) |>
mutate(
pitches =
map(segments,
compmus_summarise, pitches,
method = "mean", norm = "manhattan"
)
)
rage_plot <- rage |>
compmus_match_pitch_template(
key_templates, # Change to chord_templates if desired
method = "euclidean", # Try different distance metrics
norm = "manhattan" # Try different norms
) |>
ggplot(
aes(x = start + duration / 2, width = duration, y = name, fill = d)
) +
geom_tile() +
scale_fill_viridis_c(guide = "none") +
theme_minimal() +
labs(title = "Rage", x = "", y = "")
figure <- ggarrange(fjords_plot, rage_plot,
common.legend = TRUE,
ncol = 2, nrow = 1)
annotate_figure(figure,
# top = text_grob("Visualizing Tooth Growth", color = "red", face = "bold", size = 14),
left = text_grob("chordogram", color = "black", rot = 90),
bottom = text_grob("Time (s)", color = "black", size=10)
# fig.lab = "Figure 1", fig.lab.face = "bold"
)
# figure
```
***
When analysing the chordograms for these tracks, some of my prior assumptions are proven wrong.
Firstly, I thought that since *Fjords* by Peter Guidi has a very traditional jazz quartet line-up, with a rhythm section consisting of percussion, piano and bass, the chordogram would come out very clear.
This holds for all parts of the song where the theme and chord schema are played, but chords cannot be recognised during the three solos between roughly 100 and 400 seconds into the tune, even though the rhythm section is still playing from the same chord schema.
In these sections the chords are played with a freer musical interpretation than during the theme and bridge, which could make them harder to recognise during analysis.
For *Rage* I expected the chordogram to struggle with chord recognition due to the monotonous and non-harmonic character of the song, but it performed better than expected.
There is a clear break noticeable at the mid-point of the track, which corresponds to a break in bass usage in the song.
From this we could conclude that the bass line in techno plays some role in recognising chords.
Sadly, the chordogram for *My Favorite Things* did not match any chords, the percussion overpowering any kind of harmonics, so I have omitted that graphic.
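To make the matching behind these graphics concrete, here is a minimal, self-contained sketch of template matching, reusing the `circshift` helper and `major_chord` template from the setup chunk. The chroma vector is invented purely for illustration; the real chunks above work on Spotify chroma features in the same way.

```{r}
# Minimal sketch of template matching: every chord is a 12-dimensional
# pitch-class template, and the reported chord is the template closest
# to the observed chroma vector.
circshift <- function(v, n) {
  if (n == 0) v else c(tail(v, n), head(v, -n))
}
#                 C  C# D  Eb E  F  F# G  Ab A  Bb B
major_chord <- c( 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0)
templates <- list(
  "C:maj" = circshift(major_chord, 0),
  "F:maj" = circshift(major_chord, 5),
  "G:maj" = circshift(major_chord, 7)
)
# A made-up chroma frame with most energy on C, E and G (a C major triad).
chroma <- c(0.9, 0.1, 0, 0.1, 0.8, 0.1, 0, 0.7, 0.1, 0, 0.1, 0)
distances <- sapply(templates, function(tpl) sqrt(sum((chroma - tpl)^2)))
names(which.min(distances))  # "C:maj"
```

During a freely interpreted solo, the energy spreads over many more pitch classes, so no single template ends up clearly closest, which would explain the murky solo sections in the *Fjords* chordogram.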
### Fourier-based tempogram
```{r}
fjords_plot <- fjords_raw |>
tempogram(window_size = 8, hop_size = 1, cyclic = TRUE) |>
ggplot(aes(x = time, y = bpm, fill = power)) +
geom_raster() +
scale_fill_viridis_c(guide = "none") +
labs(title = "Fjords", x = NULL, y = NULL) +
theme_classic()
```
```{r}
favorite_plot <- favorite_things_raw |>
tempogram(window_size = 8, hop_size = 1, cyclic = TRUE) |>
ggplot(aes(x = time, y = bpm, fill = power)) +
geom_raster() +
scale_fill_viridis_c(guide = "none") +
labs(title = "My Favorite Things", x = NULL, y = NULL) +
theme_classic()
```
```{r}
rage_plot <- rage_raw |>
tempogram(window_size = 8, hop_size = 1, cyclic = TRUE) |>
ggplot(aes(x = time, y = bpm, fill = power)) +
geom_raster() +
scale_fill_viridis_c(guide = "none") +
labs(title = "Rage", x = NULL, y = NULL) +
theme_classic()
```
```{r}
figure <- ggarrange(fjords_plot, favorite_plot, rage_plot,
common.legend = TRUE,
ncol = 2, nrow = 2)
annotate_figure(figure,
top = text_grob("Fourier-based tempogram", color = "red", face = "bold", size = 14),
left = text_grob("BPM", color = "black", rot = 90),
bottom = text_grob("Time (s)", color = "black", size=10)
# fig.lab = "Figure 1", fig.lab.face = "bold"
)
```
***
Interpreting the tempograms for *Fjords*, *My Favorite Things* and *Rage*, we notice a steady BPM for the latter two and a lot of variation for the jazz piece.
This points to a wider temporal freedom in jazz, which can partly be attributed to the fact that this recording was done in one take without a metronome.
In this recording of *Fjords*, all musicians react in real time to each other's playing, and therefore we see a larger variance in BPM.
It is worth noting that *Rage* was presumably produced in an electronic setting, where tempo is strictly static and the music is not played live.
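One way to quantify this temporal freedom would be to take, per time window, the BPM with the highest power and look at its spread: a steady electronic track should give a near-zero standard deviation, live jazz a larger one. A hedged sketch, with a made-up data frame standing in for `tempogram()` output (columns `time`, `bpm`, `power`, as used in the plots):

```{r}
library(dplyr)

# Toy stand-in for tempogram() output: three candidate BPMs per window,
# with the strongest power always at 120 BPM (a perfectly steady track).
tempo_df <- data.frame(
  time  = rep(1:4, each = 3),
  bpm   = rep(c(118, 120, 122), times = 4),
  power = c(0.2, 0.9, 0.3, 0.1, 0.8, 0.2, 0.3, 0.7, 0.2, 0.2, 0.9, 0.1)
)
tempo_df |>
  group_by(time) |>
  slice_max(power, n = 1) |>   # strongest tempo estimate per window
  ungroup() |>
  summarise(bpm_sd = sd(bpm))  # 0 for this steady toy track
```

For a *Fjords*-like performance the strongest estimate would wander between windows, pushing `bpm_sd` well above zero.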
### Dynamic Time Warping
```{r, eval=FALSE}
flatbeat_original <- flatbeat_raw |>
select(segments) |>
unnest(segments) |>
select(start, duration, pitches)
flatbeat_radioedit <- flatbeat_raw_radioedit |>
select(segments) |>
unnest(segments) |>
select(start, duration, pitches)
compmus_long_distance(
flatbeat_original |> mutate(pitches = map(pitches, compmus_normalise, "chebyshev")),
flatbeat_radioedit |> mutate(pitches = map(pitches, compmus_normalise, "chebyshev")),
feature = pitches,
method = "euclidean"
) |>
ggplot(
aes(
x = xstart + xduration / 2,
width = xduration,
y = ystart + yduration / 2,
height = yduration,
fill = d
)
) +
geom_tile() +
coord_equal() +
labs(title = "Dynamic time warping for 'Flat Beat' by Mr. Oizo", x = "Original - Time (s)", y = "Radio edit - Time (s)") +
theme_minimal() +
scale_fill_viridis_c(guide = NULL)
```
***
Our whole corpus contains no covers or duplicate versions of the same song, except for these two versions of *Flat Beat* by Mr. Oizo.
The original and the radio edit are remarkably similar in length, but there is a difference in song structure, namely the lack of a long intro in the radio edit.
That intro is simply a shortened version of the original's, and the rest of both songs line up in terms of structure and tempo.
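The idea behind such an alignment can be sketched in a few lines: dynamic time warping accumulates the cheapest cost of aligning two sequences that may run at different speeds. This toy version works on plain numeric sequences rather than the chroma vectors used above, with the shorter sequence playing the role of the radio edit:

```{r}
# Textbook dynamic time warping: D[i, j] holds the cheapest cost of
# aligning the first i elements of a with the first j elements of b,
# allowing elements to be repeated or skipped along the way.
dtw_cost <- function(a, b) {
  n <- length(a)
  m <- length(b)
  D <- matrix(Inf, n + 1, m + 1)
  D[1, 1] <- 0
  for (i in seq_len(n)) {
    for (j in seq_len(m)) {
      d <- abs(a[i] - b[j])
      D[i + 1, j + 1] <- d + min(D[i, j], D[i, j + 1], D[i + 1, j])
    }
  }
  D[n + 1, m + 1]
}

original   <- c(1, 2, 3, 4, 3, 2, 1)
radio_edit <- c(1, 3, 4, 3, 1)   # same shape, shorter "intro"
dtw_cost(original, radio_edit)   # low cost: the structures align
```

In the real matrix, the bright diagonal corresponds to exactly this kind of cheap alignment path between the two versions of the track.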
### Chroma Features
```{r, eval=FALSE}
fjords <- fjords_raw |>
select(segments) |>
unnest(segments) |>
select(start, duration, pitches)
fjords_plot <- fjords |>
mutate(pitches = map(pitches, compmus_normalise, "euclidean")) |>
compmus_gather_chroma() |>
ggplot(
aes(
x = start + duration / 2,
width = duration,
y = pitch_class,
fill = value
)
) +
geom_tile() +
labs(title = "Fjords", x = NULL, y = NULL, fill = NULL) +
theme_minimal() +
scale_fill_viridis_c()
favorite_things <- favorite_things_raw |>
select(segments) |>
unnest(segments) |>
select(start, duration, pitches)
favorite_plot <- favorite_things |>
mutate(pitches = map(pitches, compmus_normalise, "euclidean")) |>
compmus_gather_chroma() |>
ggplot(
aes(
x = start + duration / 2,
width = duration,
y = pitch_class,
fill = value
)
) +
geom_tile() +
labs(title = "My Favorite Things", x = NULL, y = NULL, fill = NULL) +
theme_minimal() +
scale_fill_viridis_c()
rage <- rage_raw |>
select(segments) |>
unnest(segments) |>
select(start, duration, pitches)
rage_plot <- rage |>
mutate(pitches = map(pitches, compmus_normalise, "euclidean")) |>
compmus_gather_chroma() |>
ggplot(
aes(
x = start + duration / 2,
width = duration,
y = pitch_class,
fill = value
)
) +
geom_tile() +
labs(title = "Rage", x = NULL, y = NULL, fill = NULL) +
theme_minimal() +
scale_fill_viridis_c()
figure <- ggarrange(fjords_plot, favorite_plot, rage_plot,
common.legend = TRUE,
ncol = 2, nrow = 2)
annotate_figure(figure,
top = text_grob("Euclidean Chromagram", color = "red", face = "bold", size = 14),
left = text_grob("pitch class", color = "black", rot = 90),
bottom = text_grob("Time (s)", color = "black", size=10)
# fig.lab = "Figure 1", fig.lab.face = "bold"
)
```
***
The chromagrams for these three tracks show the same trend as the analyses in the previous tabs:
moving further along the spectrum from jazz to techno, we lose complexity but gain distinct sections of repetition.
### The whole corpus in tempo
```{r}
plt <- df |>
ggplot(
aes(
x = addedAt,
y = trackFeatureTempo,
color = trackFeatureDanceability
)
) +
scale_colour_continuous(guide = "none") +
labs(title = "Track tempo over time in liked playlist\nperiod of exchange marked in red", x = "Date track added to playlist", y = "Track tempo in BPM") +
geom_point(size = 1) +
geom_smooth(method = "gam") +
geom_vline(xintercept=as.numeric(df$addedAt[295]), color="red", linetype=4) +
geom_vline(xintercept=as.numeric(df$addedAt[635]), color="red", linetype=4)
plt
```
***
The red dashed lines mark the period of my exchange. Given the shift towards minimal techno and deep house described in the introduction, I would expect the tempo and danceability of newly added tracks to rise from this point onward.
Conclusion
=========================================
#### How wrong were my assumptions?
When looking back at this analysis of jazz, techno and *the in between*, I noticed a couple of trends that largely correspond to my initial assumptions.
*Fjords* was more complex in almost all analyses, with complexity decreasing as we move along the spectrum towards techno, while repetition and clearly delineated sections increase.